182 research outputs found

    CTL+FO Verification as Constraint Solving

    Full text link
    Expressing program correctness often requires relating program data throughout (different branches of) an execution. Such properties can be represented using CTL+FO, a logic that allows mixing temporal and first-order quantification. Verifying that a program satisfies a CTL+FO property is a challenging problem that requires both temporal and data reasoning. Temporal quantifiers require discovery of invariants and ranking functions, while first-order quantifiers demand instantiation techniques. In this paper, we present a constraint-based method for proving CTL+FO properties automatically. Our method makes the interplay between the temporal and first-order quantification explicit in a constraint encoding that combines recursion and existential quantification. By integrating this constraint encoding with an off-the-shelf solver, we obtain an automatic verifier for CTL+FO.
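
    As a concrete illustration (our own example, not one taken from the paper), a CTL+FO property relating data across an execution mixes both kinds of quantifiers, e.g.

        \forall d.\; \mathsf{AG}\big(\mathit{in} = d \rightarrow \mathsf{AF}(\mathit{out} = d)\big)

    Discharging the AF obligation calls for invariants and a ranking function, while the universally quantified d must be handled by instantiation, which is exactly the interplay the constraint encoding is meant to make explicit.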

    Efficient CTL Verification via Horn Constraints Solving

    Get PDF
    The use of temporal logics has long been recognised as a fundamental approach to the formal specification and verification of reactive systems. In this paper, we take on the problem of automatically verifying a temporal property, given by a CTL formula, for a given (possibly infinite-state) program. We propose a method based on encoding the problem as a set of Horn constraints. The method takes a program, modeled as a transition system, and a property given by a CTL formula as input. It first generates a set of forall-exists quantified Horn constraints and well-foundedness constraints by exploiting the syntactic structure of the CTL formula. The generated set of constraints is then solved by applying an off-the-shelf Horn constraint solving engine. The program satisfies the property if and only if the generated set of constraints has a solution. We demonstrate the practical promise of the method by applying it to a set of challenging examples. Although our method is based on a generic Horn constraint solving engine, it is able to outperform state-of-the-art methods specialised for CTL verification. Comment: In Proceedings HCVS 2016, arXiv:1607.0403
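
    A minimal sketch of the flavour of such an encoding, using the z3 Python bindings and its Spacer Horn-clause engine (the toy program, the AG property, and all names are our own illustration, not taken from the paper; the full method also emits forall-exists and well-foundedness constraints, which this safety-only example omits):

        # Encode AG(x >= 0) for the system "x := 0; while true: x := x + 1"
        # as Horn constraints over an unknown invariant relation inv.
        from z3 import Fixedpoint, Function, Int, IntSort, BoolSort, And, sat

        x, xp = Int('x'), Int('xp')
        inv = Function('inv', IntSort(), BoolSort())

        fp = Fixedpoint()
        fp.set(engine='spacer')
        fp.register_relation(inv)
        fp.declare_var(x, xp)

        fp.rule(inv(x), x == 0)                  # initial states satisfy inv
        fp.rule(inv(xp), [inv(x), xp == x + 1])  # inv is closed under the transition

        # Ask whether a state with x < 0 is derivable; unsat means AG(x >= 0)
        # holds and fp.get_answer() contains the synthesised invariant.
        res = fp.query(And(inv(x), x < 0))
        print('property violated' if res == sat else 'property holds')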

    Streaming data processing with FairMQ

    Get PDF

    Track finding and fitting on GPUs, first steps toward a software trigger

    Get PDF

    Evolution and use cases of FairMQ

    Get PDF

    LNCS

    Get PDF
    This paper presents Aligators, a tool for the generation of universally quantified array invariants. Aligators leverages recurrence solving and algebraic techniques to carry out inductive reasoning over array content. Aligators' loop extraction module allows treatment of multi-path loops by exploiting their commutativity and serializability properties. Our experience in applying Aligators to a collection of loops from open source software projects indicates the applicability of recurrence solving and algebraic techniques for reasoning about arrays.
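
    For a flavour of the kind of invariant involved (our own toy loop, not one of the paper's benchmarks), a sketch:

        # The loop admits the universally quantified array invariant
        #     forall k. 0 <= k < i  ->  a[k] == k,
        # obtainable by solving the recurrences a[i] = i and i' = i + 1
        # over the iterations; at exit (i == n) it yields the assertion below.
        def fill(n):
            a = [0] * n
            i = 0
            while i < n:
                a[i] = i
                i += 1
            assert all(a[k] == k for k in range(n))
            return a

        fill(8)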

    Operational semantics for declarative networking

    Get PDF
    Declarative Networking has recently been promoted as a high-level programming paradigm to more conveniently describe and implement systems that run in a distributed fashion over a computer network. It has already been used to implement various networked systems, e.g., network overlays, Byzantine fault tolerance protocols, and distributed hash tables. Declarative Networking relies upon a rule-based programming language that resembles Datalog and allows one to declaratively specify the flow of networking events. However, the presence of asynchronous communication, distribution, and imperative modification of the program state in Declarative Networking applications has been an obstacle to defining its semantics. Currently, the reference semantics is determined by the runtime environment only, which hinders further application development and makes it impossible to develop program analysis and verification tools. In this paper, we propose an operational semantics for Declarative Networking that addresses these problems. The semantics is parameterized to keep open a design space required at the current stage of the language's development. We also report on our first experience with an interpreter for Declarative Networking applications that implements the proposed semantics.
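
    A very small sketch of the evaluation model the abstract describes (our own abstraction in Python, not the semantics defined in the paper): each node keeps a store of facts and an asynchronous inbox, fires Datalog-like rules when an event arrives, updates its state imperatively, and emits events addressed to other nodes.

        from collections import deque

        class Node:
            def __init__(self, name, rules):
                self.name = name
                self.facts = set()    # mutable program state
                self.inbox = deque()  # asynchronous event queue
                self.rules = rules    # event, facts -> [(new_fact, [(dest, msg), ...])]

            def step(self, network):
                if not self.inbox:
                    return
                event = self.inbox.popleft()
                for new_fact, outgoing in self.rules(event, self.facts):
                    self.facts.add(new_fact)             # imperative state update
                    for dest, msg in outgoing:
                        network[dest].inbox.append(msg)  # asynchronous send

        # Hypothetical rule: on receiving ('link', src), record the fact and
        # forward a ('path', src) event to node 'b'.
        def rules(event, facts):
            kind, src = event
            if kind == 'link':
                return [(('link', src), [('b', ('path', src))])]
            return []

        net = {'a': Node('a', rules), 'b': Node('b', rules)}
        net['a'].inbox.append(('link', 'c'))
        net['a'].step(net)
        print(net['b'].inbox)  # deque([('path', 'c')])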

    Spatial Interpolants

    Full text link
    We propose Splinter, a new technique for proving properties of heap-manipulating programs that marries (1) a new separation logic-based analysis for heap reasoning with (2) an interpolation-based technique for refining heap-shape invariants with data invariants. Splinter is property directed, precise, and produces counterexample traces when a property does not hold. Using the novel notion of spatial interpolants modulo theories, Splinter can infer complex invariants over general recursive predicates, e.g., of the form "all elements in a linked list are even" or "a binary tree is sorted". Furthermore, we treat interpolation as a black box, which gives us the freedom to encode data manipulation in any suitable theory for a given program (e.g., bit vectors, arrays, or linear arithmetic), so that our technique immediately benefits from any future advances in SMT solving and interpolation. Comment: Short version published in ESOP 201
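
    As a toy illustration (our own example, not a Splinter benchmark) of the combined shape and data property mentioned in the abstract, "all elements in a linked list are even":

        # After build(n), head points to an acyclic singly linked list in which
        # every stored value is even; an invariant of this form is the kind a
        # spatial interpolant couples with a list-segment shape predicate.
        class Cell:
            def __init__(self, value, nxt=None):
                self.value = value
                self.next = nxt

        def build(n):
            head = None
            for i in range(n):
                head = Cell(2 * i, head)  # only even values enter the list
            return head

        def all_even(node):
            while node is not None:
                if node.value % 2 != 0:
                    return False
                node = node.next
            return True

        assert all_even(build(5))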

    Applying Prolog to Develop Distributed Systems

    Get PDF
    Development of distributed systems is a difficult task. Declarative programming techniques hold promising potential for effectively supporting programmers in this challenge. While Datalog-based languages have been actively explored for programming distributed systems, Prolog has so far received relatively little attention in this application area. In this paper we present a Prolog-based programming system, called DAHL, for the declarative development of distributed systems. DAHL extends Prolog with an event-driven control mechanism and built-in networking procedures. Our experimental evaluation using a distributed hash-table data structure, a protocol for achieving Byzantine fault tolerance, and a distributed software model checker - all implemented in DAHL - indicates the viability of the approach.

    Interpolation Properties and SAT-based Model Checking

    Full text link
    Craig interpolation is a widespread method in verification, with important applications such as Predicate Abstraction, CounterExample Guided Abstraction Refinement and Lazy Abstraction With Interpolants. Most state-of-the-art model checking techniques based on interpolation require collections of interpolants to satisfy particular properties, which we refer to as "collectives"; these do not hold in general for all interpolation systems and have to be established for each particular system and verification environment. Nevertheless, no systematic approach exists that correlates the individual interpolation systems and compares the necessary collectives. This paper proposes a uniform framework, which encompasses (and generalizes) the most common collectives exploited in verification. We use it for a systematic study of the collectives and of the constraints they pose on propositional interpolation systems used in SAT-based model checking.
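
    As a reminder of the basic notion the collectives build on (our own toy formulas, using the z3 Python bindings), a minimal check of the defining properties of a single Craig interpolant:

        # For an unsatisfiable pair (A, B), an interpolant I satisfies:
        # A entails I, I & B is unsatisfiable, and I uses only shared symbols.
        # The "collectives" studied in the paper are further properties imposed
        # on whole collections of such interpolants.
        from z3 import Bools, And, Not, Solver, unsat

        p, q, r = Bools('p q r')
        A = And(p, q)       # p is shared, q is local to A
        B = And(Not(p), r)  # p is shared, r is local to B
        I = p               # candidate interpolant over the shared symbol p

        def is_unsat(formula):
            s = Solver()
            s.add(formula)
            return s.check() == unsat

        assert is_unsat(And(A, B))       # (A, B) is jointly unsatisfiable
        assert is_unsat(And(A, Not(I)))  # A entails I
        assert is_unsat(And(I, B))       # I is inconsistent with B
        print('I = p is a Craig interpolant for (A, B)')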